
    Natural Language Processing Applications in Business

    The increasing dependency of humans on computer-assisted systems has led researchers to focus on more effective communication technologies that can mimic human interactions as well as understand natural languages and human emotions. The problem of information overload in every sector, including business, healthcare, and education, has led to an increase in unstructured data, which is often considered unusable in its raw form. Natural language processing (NLP) in this context is one of the effective technologies that can be integrated with advanced technologies, such as machine learning, artificial intelligence, and deep learning, to improve the process of understanding and processing natural language. This can enable more effective human-computer interaction and allow for the analysis and formatting of large volumes of otherwise unusable, unstructured data/text across industries, delivering meaningful outcomes that can enhance decision-making and thus improve operational efficiency. Focusing on this aspect, this chapter explains the concept of NLP and its history and development, while also reviewing its application in various industrial sectors.

    Capturing Public Concerns about Coronavirus Using Arabic Tweets: An NLP-Driven Approach

    In order to analyze people's reactions and opinions about Coronavirus (COVID-19), there is a need for a computational framework that leverages machine learning (ML) and natural language processing (NLP) techniques to identify COVID tweets and further categorize them into disease-specific feelings, addressing societal concerns related to Safety, Worry, and Irony about COVID. This is an ongoing study, and the purpose of this paper is to demonstrate the initial results of determining the relevancy of the tweets and what Arabic-speaking people were tweeting about the three disease-related feelings/emotions about COVID: Safety, Worry, and Irony. A combination of ML and NLP techniques is used to determine what Arabic-speaking people are tweeting about COVID. A two-stage classifier system was built to find relevant tweets about COVID, and then the tweets were categorized into three categories. Results indicated that the numbers of tweets by males and females were similar. Classification performance was high for both relevancy (F=0.85) and categorization (F=0.79). Our study has demonstrated how categories of discussion on Twitter about an epidemic can be discovered so that officials can understand specific societal concerns related to the emotions and feelings surrounding the epidemic.
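    The two-stage design described above can be sketched as a pair of chained text classifiers: the first filters for relevance, the second assigns a feeling category only to tweets that pass the filter. The sketch below is a minimal illustration using scikit-learn with invented English toy tweets; the paper's actual Arabic corpus, features, and models are not reproduced here.

```python
# Hedged sketch of a two-stage tweet classifier (toy data, not the
# study's corpus): stage 1 filters relevance, stage 2 categorizes.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Stage 1 training data: is the tweet about COVID at all? (1 = relevant)
relevance_train = [
    ("stay home to stop the virus spreading", 1),
    ("hospitals overwhelmed by covid cases", 1),
    ("great football match last night", 0),
    ("new phone release announced next week", 0),
]
# Stage 2 training data: categorize relevant tweets into Safety/Worry/Irony.
category_train = [
    ("wash your hands and wear a mask", "Safety"),
    ("masks and distancing keep us protected", "Safety"),
    ("scared the virus will reach my family", "Worry"),
    ("so anxious about rising case numbers", "Worry"),
    ("sure, the virus takes weekends off", "Irony"),
    ("apparently covid avoids supermarkets", "Irony"),
]

stage1 = make_pipeline(TfidfVectorizer(), LogisticRegression())
stage1.fit([t for t, _ in relevance_train], [y for _, y in relevance_train])

stage2 = make_pipeline(TfidfVectorizer(), LogisticRegression())
stage2.fit([t for t, _ in category_train], [y for _, y in category_train])

def classify(tweet):
    """Return None for irrelevant tweets, else a feeling category."""
    if stage1.predict([tweet])[0] == 0:
        return None
    return stage2.predict([tweet])[0]

print(classify("hospitals overwhelmed by covid cases"))
```

    In practice each stage would be trained on labeled Arabic tweets with language-specific preprocessing; chaining the stages keeps the category model from ever seeing off-topic text.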

    Bridging the Gap Between Informal Learning Pedagogy and Multimodal Learning Analytics

    Multimodal learning happens in different contexts where different technologies are utilized. In such contexts, learners use different modalities in very unstructured ways, including video, audio, motion, and eye tracking, to mention but a few. However, effective applications of Multimodal Learning Analytics (MMLA) remain challenging. Enabling educational technologies are underpinned by various pedagogical models designed to deliver the educational value of technology. Nevertheless, the link between multimodal learning analytics and informal learning pedagogy is not well established in the literature. Hence, this chapter aims at bridging the gap between multimodal learning analytics research concepts and approaches on one side and key informal learning pedagogical approaches, such as self-directed learning, and learning theories, such as behaviorism and cognitivism, on the other side. Establishing this link is expected to pave the way for insightful and pedagogically informed learning analytics across multiple contexts. In addition, multimodal learning analytics techniques and challenges are discussed to highlight the key concerns of MMLA applications.

    Characterizing Visual Programming Approaches for End-User Developers: A Systematic Review

    Recently, many studies have explored the potential of visual programming in robotics, the Internet of Things (IoT), and education. However, there is a lack of studies that analyze recent evidence-based visual programming approaches applied across these domains. This study presents a systematic review to understand, compare, and reflect on recent visual programming approaches using twelve dimensions: visual programming classification, interaction style, target users, domain, platform, empirical evaluation type, test participants' type, number of test participants, test participants' programming skills, evaluation methods, evaluation measures, and accessibility of visual programming tools. The results show that most of the selected articles discussed tools that target IoT and education, while other fields such as data science and robotics are emerging. Further, most tools use abstractions to hide implementation details and use similar interaction styles. The predominant platforms for the tools are web and mobile, while desktop-based tools are on the decline. Only a few tools were evaluated with a formal experiment, whilst the remaining ones were evaluated with evaluation studies or informal feedback. Most tools were evaluated with students with little to no programming skills. There is a lack of emphasis on usability principles in the design stage of the tools. Additionally, only one of the tools was evaluated for expressiveness. Other areas for exploration include supporting end users throughout the life cycle of applications created with the tools, studying the impact of tutorials on improving learnability, and exploring the potential of machine learning to improve the debugging of solutions developed with visual programming. © 2013 IEEE

    Embracing Technological Change in Higher Education

    Access to information has never been easier, thanks to the rapid development of the internet and communication technologies, and the ubiquity of smartphones and other internet-enabled devices. In traditional classroom learning, teachers provide students with various sources of information that are known to be reliable. Nowadays, especially in a post-pandemic era, students increasingly rely on a host of resources available on the internet. Exposure to vast amounts of scattered information could adversely affect students’ learning process. Meanwhile, pedagogical approaches, classroom learning practices, and student learning activities have evolved significantly to cope with contemporary challenges. This study reviews current learning practices and technological interventions in a rapidly evolving higher education landscape. In particular, the challenges of integrating technology into higher education are considered in detail, and ways of addressing them in that context are put forward.

    Identifying Patient Experience from Online Resources via Sentiment Analysis and Topic Modelling Approaches

    Understanding patient experience is important for healthcare service providers, and online platforms such as websites and forums are excellent sources for collecting patient feedback. Information from online sources can be vast and unstructured, which may make…

    Unlink the Link Between COVID-19 and 5G Networks: An NLP and SNA Based Approach

    Social media facilitates the rapid dissemination of both factual and fictional information. The spread of non-scientific information through social media platforms such as Twitter has the potential to cause damaging consequences. Situations such as the COVID-19 pandemic provide a favourable environment for misinformation to thrive. The upcoming 5G technology is one of the recent victims of misinformation and fake news and has been plagued with misinformation about the effects of its radiation. During the COVID-19 pandemic, conspiracy theories linking the cause of the pandemic to 5G technology resonated with a section of people, leading to outcomes such as destructive attacks on 5G towers. Analysis of social network data can help to understand the nature of the information being spread and identify commonly occurring themes. Natural language processing (NLP) and statistical analysis of social network data can empower policymakers to understand the misinformation being spread and develop targeted strategies to counter it. In this paper, an NLP-based analysis of tweets linking COVID-19 to 5G is presented. NLP models including Latent Dirichlet allocation (LDA), sentiment analysis (SA), and social network analysis (SNA) were applied to analyze the tweets and identify topics. An understanding of topic frequencies, the inter-relationships between topics, and the geographical distribution of the tweets makes it possible to identify agencies and patterns in the spread of misinformation and equips policymakers with knowledge to devise counter-strategies.

    Automating Large-scale Health Care Service Feedback Analysis: Sentiment Analysis and Topic Modeling Study

    BACKGROUND: Obtaining patient feedback is an essential mechanism for health care service providers to assess their quality and effectiveness. Unlike assessments of clinical outcomes, feedback from patients offers insights into their lived experiences. The Department of Health and Social Care in England via National Health Service Digital operates a patient feedback web service through which patients can leave feedback of their experiences in structured and free-text report forms. Free-text feedback, compared with structured questionnaires, may be less biased by the feedback collector and, thus, more representative; however, it is harder to analyze in large quantities and challenging to derive meaningful, quantitative outcomes. OBJECTIVE: The aim of this study is to build a novel data analysis and interactive visualization pipeline accessible through an interactive web application to facilitate the interrogation of and provide unique insights into National Health Service patient feedback. METHODS: This study details the development of a text analysis tool that uses contemporary natural language processing and machine learning models to analyze free-text clinical service reviews to develop a robust classification model and interactive visualization web application. The methodology is based on the design science research paradigm and was conducted in three iterations: a sentiment analysis of the patient feedback corpus in the first iteration, topic modeling (unigram and bigram)–based analysis for topic identification in the second iteration, and nested topic modeling in the third iteration that combines sentiment analysis and topic modeling methods. An interactive data visualization web application for use by the general public was then created, presenting the data on a geographic representation of the country, making it easily accessible. 
RESULTS: Of the 11,103 possible clinical services that could be reviewed across England, 2030 (18.28%) different services received a combined total of 51,845 reviews between October 1, 2017, and September 30, 2019. Dominant topics were identified for the entire corpus, followed by negative- and positive-sentiment topics in turn. Reviews containing high- and low-sentiment topics occurred more frequently than reviews containing less polarized topics. Time-series analysis identified trends in topic and sentiment occurrence frequency across the study period. CONCLUSIONS: Using contemporary natural language processing techniques, unstructured text data were effectively characterized for further analysis and visualization. An efficient pipeline was successfully combined with a web application, making automated analysis and dissemination of large volumes of information accessible. This study represents a significant step in efforts to generate and visualize useful, actionable, and unique information from free-text patient reviews.
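    The nested approach in the third iteration, scoring sentiment first and then examining topics within each sentiment group, can be sketched with a toy lexicon and word counts standing in for the study's trained sentiment and topic models. Everything here (the lexicon, the reviews, the counting step) is an illustrative assumption, not the study's pipeline.

```python
# Hedged sketch of nested sentiment + topic analysis on toy reviews:
# bucket reviews by sentiment, then summarize terms within each bucket
# as a crude stand-in for per-sentiment topic modeling.
from collections import Counter, defaultdict

# Toy sentiment lexicon (assumption; the study used trained NLP models).
POSITIVE = {"excellent", "kind", "clean", "helpful", "quick"}
NEGATIVE = {"rude", "dirty", "slow", "cancelled", "painful"}

def sentiment(review):
    """Score a review by lexicon overlap and return a sentiment label."""
    words = set(review.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

reviews = [
    "staff were kind and helpful",
    "waiting room was dirty and staff rude",
    "appointment cancelled twice, very slow process",
    "quick check-in and excellent care",
]

# Nested step: per-sentiment term counts approximate per-sentiment topics.
buckets = defaultdict(Counter)
for r in reviews:
    buckets[sentiment(r)].update(r.lower().split())

for label, counts in sorted(buckets.items()):
    print(label, counts.most_common(3))
```

    The real pipeline replaces the lexicon with a trained sentiment classifier and the word counts with unigram/bigram topic models fitted separately inside each sentiment group, which is what lets negative-sentiment topics surface distinctly from positive ones.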